    Chapter 20: What do interviewers learn? Changes in interview length and interviewer behaviors over the field period. Appendix 20

    Appendix 20A Full Model Coefficients and Standard Errors Predicting Count of Questions with Individual Interviewer Behaviors, Two-level Multilevel Poisson Models with Number of Questions Asked as Exposure Variable, WLT1 and WLT2
    Analytic strategy
    Tables A20A.1 through A20A.15 report coefficients and standard errors from multilevel Poisson regression models predicting the number of questions with each interviewer behavior below, with the total number of questions asked to each respondent as an exposure variable, WLT1 and WLT2:
    Table A20A.1 Exact Question Reading
    Table A20A.2 Nondirective Probes
    Table A20A.3 Adequate Verification
    Table A20A.4 Appropriate Clarification
    Table A20A.5 Appropriate Feedback
    Table A20A.6 Stuttering During Question Reading
    Table A20A.7 Disfluencies
    Table A20A.8 Pleasant Talk
    Table A20A.9 Any Task-Related Feedback
    Table A20A.10 Laughter
    Table A20A.11 Minor Changes in Question Reading
    Table A20A.12 Major Changes in Question Reading
    Table A20A.13 Directive Probes
    Table A20A.14 Inadequate Verification
    Table A20A.15 Interruptions
    Appendix 20B Full Model Coefficients and Standard Errors Predicting Interview Length with Sets of Interviewer Behaviors, Two-level Multilevel Linear Models, WLT1 and WLT2
    Tables A20B.1 through A20B.5 report coefficients and standard errors from multilevel linear regression models predicting total duration, WLT1 and WLT2:
    Table A20B.1 No Interviewer Behaviors
    Table A20B.2 Including Standardized Interviewer Behaviors
    Table A20B.3 Including Inefficiency Interviewer Behaviors
    Table A20B.4 Including Nonstandardized Interviewer Behaviors
    Table A20B.5 Including All Interviewer Behaviors
    Appendix 20C Mediation Models for Each Individual Interviewer Behavior
    Table A20C.1 Indirect, Direct, and Total Effect of Each Interviewer Behavior on Interview Length through Interview Order, Work and Leisure Today 1
    Table A20C.2 Indirect, Direct, and Total Effect of Each Interviewer Behavior on Interview Length through Interview Order, Work and Leisure Today 2
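
    As a rough illustration of the modeling setup shared by the Appendix 20A tables, the sketch below fits a Poisson regression in Python with the total number of questions asked entering as an exposure (log-offset) term. This is a simplified sketch, not the chapter's exact two-level specification: the data file and column names are hypothetical, and interviewer clustering is approximated here with cluster-robust standard errors rather than interviewer random intercepts.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per respondent, with the count of
    # questions showing a given behavior and the total questions asked.
    df = pd.read_csv("wlt1_behaviors.csv")

    # Poisson model for the count of questions with exact question reading.
    # exposure= adds log(n_questions) as an offset, so coefficients describe
    # the rate of the behavior per question asked.
    model = smf.glm(
        "n_exact_reading ~ interview_order + respondent_age",
        data=df,
        family=sm.families.Poisson(),
        exposure=df["n_questions"],
    )

    # Cluster-robust standard errors by interviewer stand in for the
    # two-level (interviewer random intercept) structure in this sketch.
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["interviewer_id"]})
    print(result.summary())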

    Unpacking the black box of survey costs

    Survey costs are a critically important input to, and constraint on, the quality of data collected from surveys. Much about survey costs is unknown, leading to a poor understanding of the drivers of survey costs and of the relationship between survey costs and survey errors, and to difficulty in justifying the value of survey data relative to other available administrative or organic data. This commentary outlines a recently developed typology for survey costs, illustrates this typology using methodological articles that report on costs in pharmacy surveys, and recommends the relationship between fixed and variable costs, as well as the relationship between costs and errors, as major areas for further reporting and research.
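
    The fixed/variable distinction at the heart of that recommendation can be made concrete with a toy cost model; all numbers below are hypothetical and purely illustrative.

    # Toy cost model with hypothetical numbers: fixed costs (questionnaire
    # development, programming, interviewer training) are paid once, while
    # variable costs (interviewer time, postage, incentives) scale with the
    # number of completed interviews.
    FIXED_COST = 50_000.0
    VARIABLE_COST_PER_CASE = 40.0

    for n_completes in (250, 1_000, 4_000):
        total = FIXED_COST + VARIABLE_COST_PER_CASE * n_completes
        print(f"n={n_completes:>5}: total=${total:>9,.0f}, "
              f"per complete=${total / n_completes:,.2f}")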

    Comments on “How Errors Cumulate: Two Examples” by Roger Tourangeau

    This paper provides a discussion of the Tourangeau (2019) Morris Hansen Lecture paper. I address issues related to compounding errors in web surveys and the relationship between nonresponse and measurement errors. I provide a potential model for understanding when error sources in nonprobability web surveys may compound or counteract one another, along with three conceptual models that help explicate the joint relationship between nonresponse and measurement errors. Tourangeau’s paper provides two interesting case studies about the role of multiple error sources in survey data. The first concerns errors that occur at different stages of the representation process: errors first arise when creating a potential sample frame, may be amplified when selecting sampled persons, possibly because of self-selection, and are then exacerbated by an individual’s decision to participate. The second concerns situations where different error sources may influence each other, in particular the relationship between nonresponse error and various measurement error outcomes.
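
    As a generic illustration of that compound-or-counteract logic (a sketch, not necessarily the model proposed in this commentary), the bias of a survey estimate can be written as an additive decomposition over the representation and measurement steps:

    \[
    \mathrm{Bias}(\hat{\theta}) \;=\; B_{\text{coverage}} + B_{\text{selection}} + B_{\text{nonresponse}} + B_{\text{measurement}}
    \]

    Under such a decomposition, same-signed components compound (for example, +2 and +2 yield a net bias of +4), while opposite-signed components counteract (+2 and -2 yield 0).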

    Assessing Potential Errors in Level-of-Effort Paradata using GPS Data

    Surveys are a critical resource for social, economic, and health research. The ability to efficiently collect these data and develop accurate post-survey adjustments depends upon reliable data about the effort required to recruit sampled units. Level-of-effort paradata are data generated by interviewers during the process of collecting data in surveys. These data are often used as predictors in nonresponse adjustment models or to guide data collection efforts. However, recent research has found that these data may include measurement errors, which can lead to inaccurate decisions in the field or reduced effectiveness for adjustment purposes (Biemer, Chen, & Wang, 2013; West, 2013). To assess whether errors occur in level-of-effort paradata for in-person surveys, we introduce a new source of data: Global Positioning System (GPS) data generated by smartphones carried by interviewers. We examine the quality of the GPS data, and we link the GPS data with the interviewer-reported call records to identify potential errors in the call records. Specifically, we examine whether call records may be missing. Given the lack of a gold standard, we perform a sensitivity analysis under various assumptions to see how they would change our conclusions.
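
    One way such a check could be implemented is sketched below: flag interviewer GPS points that fall near a sampled address at a time when no call record was logged for that case. The file names, fields, and thresholds are all hypothetical, and the distance and time cutoffs are exactly the knobs a sensitivity analysis would vary.

    import pandas as pd
    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6_371_000 * 2 * asin(sqrt(a))

    gps = pd.read_csv("gps_points.csv", parse_dates=["timestamp"])      # hypothetical
    calls = pd.read_csv("call_records.csv", parse_dates=["call_time"])  # hypothetical
    addresses = pd.read_csv("sample_addresses.csv")  # case_id, lat, lon

    RADIUS_M = 100                  # how close counts as a visit
    WINDOW = pd.Timedelta("30min")  # how far apart timestamps may be

    flagged = []
    for _, addr in addresses.iterrows():
        near = gps[gps.apply(
            lambda g: haversine_m(g.lat, g.lon, addr.lat, addr.lon) <= RADIUS_M,
            axis=1)]
        case_calls = calls[calls.case_id == addr.case_id]
        for _, g in near.iterrows():
            # A GPS visit with no call record within the window suggests a
            # potentially missing record (or a GPS/timestamp error).
            if not ((case_calls.call_time - g.timestamp).abs() <= WINDOW).any():
                flagged.append((addr.case_id, g.timestamp))

    print(f"{len(flagged)} GPS visits without a matching call record")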

    A Comparison of Fully Labeled and Top-Labeled Grid Question Formats

    The grid question format is common in mail and web surveys. In this format, a single question stem introduces a set of items, which are listed in rows of a table underneath the question stem. The table’s columns contain the response options, usually listed only at the top, with answer spaces arrayed below and aligned with the items (Dillman et al. 2014). This format is efficient for respondents; they do not have to read the full question stem and full set of response options for every item in the grid. Likewise, it is space efficient for the survey researcher, which reduces printing and shipping costs in mail surveys and scrolling in web surveys.

    However, grids also complicate the response task by introducing fairly complex groupings of information. To answer grid items, respondents have to connect disparate pieces of information in space by locating the position on the page or screen where the proper row (the item prompt) intersects with the proper column (the response option). The difficulty of this task increases when the respondent has to traverse the largest distances to connect items to response option labels (down and right in the grid) (Couper 2008; Kaczmirek 2011). This spatial connection task has to be conducted while remembering the shared question stem, perhaps after reading and answering multiple items. As a result, grid items are prone to high rates of item nonresponse, straightlining, and breakoffs (Couper et al. 2013; Tourangeau et al. 2004). One way to possibly ease the burdens of grids in mail surveys is to repeat the response option labels in each row next to their corresponding answer spaces (Dillman 1978). Including response option labels near the answer spaces eliminates the need for vertical processing, allowing respondents to focus only on processing horizontally. However, fully labeling the answer spaces yields a busier, denser display overall, which, one can speculate, might intimidate or overwhelm some respondents, leading them to skip the grid entirely.

    In this chapter we report the results of a series of experimental comparisons of fully labeled versus top-labeled grid formats from a national probability mail survey, a convenience sample of students in a paper-and-pencil survey, and a convenience sample in a web-based eye-tracking laboratory study. For each experiment we compare mean responses, inter-item correlations, item nonresponse rates, and straightlining. For the eye-tracking experiment, we also examine whether the different grid designs affected how respondents visually processed the grid items. For two of the experiments, we conduct subgroup analyses to assess whether the effects of the grids differed for respondents of high and low cognitive ability. Our experiments use both attitude and behavior questions covering a wide variety of question topics and a variety of response scale types.
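
    As an illustration of how two of these outcomes could be computed from respondent-level data, the sketch below measures item nonresponse as the share of blank grid items and straightlining as identical answers across all items of a fully answered grid. The file and column names are hypothetical.

    import pandas as pd

    df = pd.read_csv("grid_experiment.csv")   # one row per respondent
    items = [f"q1_{i}" for i in range(1, 9)]  # the grid's items; NaN = blank

    # Item nonresponse: share of the grid's items left blank per respondent.
    df["item_nonresponse"] = df[items].isna().mean(axis=1)

    # Straightlining: every item in a fully answered grid has the same value.
    answered = df[items].dropna(how="any")
    df["straightlined"] = False
    df.loc[answered.index, "straightlined"] = answered.nunique(axis=1).eq(1)

    # Compare outcomes across the experimental grid formats.
    print(df.groupby("grid_format")[["item_nonresponse", "straightlined"]].mean())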

    The effect of emphasis in telephone survey questions on survey measurement quality

    Questionnaire design texts commonly recommend emphasizing important words, for example through capitalization or underlining, to promote their processing by the respondent. In self-administered surveys, respondents can see the emphasis, but in an interviewer-administered survey, emphasis has to be communicated to respondents through audible signals. We report the results of experiments in two US telephone surveys in which telephone survey questions were presented to interviewers either with or without emphasis. We examine whether emphasis changes substantive answers to survey questions, whether interviewers actually engage in verbal emphasis behaviors, and whether emphasis changes the interviewer-respondent interaction. We find surprisingly little effect of question emphasis on any outcome; the effects that do appear are primarily on vocal intonation and the interviewer-respondent interaction. Thus, there is no evidence here to suggest that questionnaire designers should use emphasis in interviewer-administered questionnaires to improve data quality. As the first study on this topic, this work suggests many opportunities for future research.

    A comparison of frequency of alcohol and marijuana use using short message service surveying and survey questionnaires among homeless youth

    Background: There are several benefits to using short message service (SMS) surveying to gather data on substance use from homeless youth, including capturing data “in the moment” and verifying the timing of one behavior relative to another. Though SMS is a valuable data collection tool for highly mobile populations that are otherwise difficult to sample longitudinally, the reliability of SMS compared with surveys is largely unknown for homeless youth. Examining the reliability of SMS is important because these data can provide a more nuanced understanding of the relationships between various risk behaviors, which may lead to better intervention strategies for these youth. Objectives: We compared past-30-day survey and SMS data on youth’s alcohol and marijuana use. Methods: We interviewed 150 homeless youth (51% female) using surveys and SMS. Results: Past-30-day survey and SMS data revealed moderately strong correlations for alcohol (rs = .563) and marijuana (rs = .564). Regression analysis revealed that independent variables were similarly associated with alcohol and marijuana use when comparing survey and SMS data, with two exceptions: heterosexual youth reported less alcohol use in SMS data than in survey data (β = −.212, p < .05 vs. β = −.006, p > .05, respectively), and youth whose parents had alcohol problems reported less marijuana use in survey data than in SMS data (β = −.277, p < .01 vs. β = −.150, p > .05, respectively). Conclusion: Findings indicate SMS and surveys are both reliable methods of gathering data on substance use from homeless youth.
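
    As a minimal sketch of the headline reliability check (the file and column names are hypothetical), the Spearman rank correlations between survey and SMS past-30-day measures could be computed as follows:

    import pandas as pd
    from scipy.stats import spearmanr

    df = pd.read_csv("youth_substance_use.csv")  # one row per youth

    for substance in ("alcohol", "marijuana"):
        # Correlate past-30-day use as reported in the survey vs. via SMS.
        rs, p = spearmanr(df[f"{substance}_survey"], df[f"{substance}_sms"],
                          nan_policy="omit")
        print(f"{substance}: rs = {rs:.3f} (p = {p:.3f})")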

    Report to Congress on Waivers Granted Under the Elementary and Secondary Education Act

    This is the third annual report to Congress on waivers granted by the U.S. Department of Education, mandated under section 14401(e)(4) of the Elementary and Secondary Education Act (ESEA). Three education laws passed in 1994 (the Goals 2000: Educate America Act, the School-to-Work Opportunities Act, and the reauthorized ESEA) allow the Secretary of Education to grant waivers of certain requirements of federal education programs in cases where a waiver will likely contribute to improved teaching and learning. States and school districts use the waiver authorities to adapt federal programs and use federal funds in ways that address their local needs. The waiver authorities provide additional flexibility to states and school districts in exchange for increased accountability for improving student achievement. The law requires that waiver applicants describe how a waiver would improve instruction and academic performance, and that the waivers conform to the underlying intent and purposes of the affected programs. This report contains five sections. Section I gives an overview of waivers requested and granted from the establishment of the waiver authorities in 1994 through September 30, 1999. Section II provides details about the focus of the waivers that have been granted. Section III examines the progress school districts and states have made under waivers that have been in effect for at least two years, as reported by states to the U.S. Department of Education. Section IV reviews the federal and state roles in the administration of the waiver authorities, and Section V contains some conclusions about how waivers contribute to the broader effort to improve teaching and learning for all students.

    A Comparison of Full and Quasi Filters for Autobiographical Questions

    Some survey questions do not apply to all respondents. How to design these questions for both eligible and ineligible respondents is unclear. This article compares full filter (FF) and quasi filter (QF) designs for autobiographical questions in mail surveys. Using data from the National Health, Wellbeing, and Perspectives Study, we examine the effect of the type of filter on item nonresponse rates, response errors, and response distributions. We find that QF questions are more confusing to respondents, resulting in higher rates of item nonresponse and response errors than FF questions. Additionally, FF questions more successfully identify ineligible respondents, bringing estimates closer to national benchmarks. We recommend that survey designers use FF designs rather than QF designs for autobiographical questions in mail surveys.